Partitioning Networks with Node Attributes by Compressing Information Flow
Similar Resources
Compressing networks with super nodes
Community detection is a commonly used technique for identifying groups in a network based on similarities in connectivity patterns. To facilitate community detection in large networks, we recast the network to be partitioned into a smaller network of ‘super nodes’, each super node comprising one or more nodes in the original network. To define the seeds of our super nodes, we apply the ‘CoreHD...
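The abstract above only sketches the super-node idea, and the CoreHD seeding step is truncated. As a generic illustration (not the paper's actual CoreHD-based method), the contraction step can be sketched as follows: given an assignment of original nodes to super nodes, collapse the edge list into a smaller weighted network, where each super-edge weight counts the original edges between two groups.

```python
from collections import defaultdict

def contract(edges, assignment):
    """Contract a network into super nodes (illustrative sketch).

    edges: iterable of (u, v) pairs in the original network.
    assignment: dict mapping each original node to its super-node id.
    Returns a dict mapping super-node pairs to edge multiplicities.
    """
    super_edges = defaultdict(int)
    for u, v in edges:
        su, sv = assignment[u], assignment[v]
        if su != sv:  # edges internal to a super node are absorbed
            super_edges[tuple(sorted((su, sv)))] += 1
    return dict(super_edges)

# Toy network: a triangle {0,1,2} and a pair {3,4} joined by one edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (2, 3)]
assignment = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B"}
print(contract(edges, assignment))  # {('A', 'B'): 1}
```

Community detection can then be run on the much smaller contracted network; how the seeds (the `assignment` here) are chosen is exactly what the CoreHD step in the paper addresses.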
Compressing Information
If the original file contained audio, or an image, we may not be worried about recovering all the bits of the original file, we just want whatever we recover to sound or look the same as the original. As we pointed out at the end of the last section, the computer files you customarily use to store images and sounds contain far fewer bytes than the corresponding bitmap and wave files we dealt wi...
Learning Diffusion Probability Based on Node Attributes in Social Networks
Information diffusion over a social network is analyzed by modeling the successive interactions of neighboring nodes as probabilistic processes of state changes. We address the problem of estimating parameters (diffusion probability and time-delay parameter) of the probabilistic model as a function of the node attributes from the observed diffusion data by formulating it as the maximum likeliho...
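The estimation problem above is described only up to its truncated maximum-likelihood formulation. A heavily simplified version, ignoring the time-delay parameter and the dependence on node attributes, treats each exposure of a node by an active neighbour as a Bernoulli trial with a single unknown probability p; the likelihood is then a product of Bernoulli terms, and the MLE is the overall success fraction:

```python
def estimate_diffusion_probability(trials):
    """MLE of a single diffusion probability p (simplified sketch).

    trials: list of (exposures, activations) pairs, i.e. how often a
    node was exposed by an active neighbour and how many of those
    exposures led to activation. Maximising the product of Bernoulli
    likelihoods p^k * (1-p)^(n-k) gives p_hat = total k / total n.
    """
    exposures = sum(n for n, _ in trials)
    activations = sum(k for _, k in trials)
    return activations / exposures if exposures else 0.0

# Three observed diffusion episodes: 20 exposures, 5 activations.
print(estimate_diffusion_probability([(10, 3), (5, 1), (5, 1)]))  # 0.25
```

The paper's actual model is richer: it makes p (and the time delay) a function of node attributes, so the closed-form fraction above is replaced by a parametric likelihood maximised numerically.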
Compressing Information of Target Tracking in Wireless Sensor Networks
Target tracking is a well-studied topic in wireless sensor networks. It is a procedure in which nodes in the network collaborate to detect targets and continuously transmit their information to the base station, which leads to data implosion and redundancy. To reduce the traffic load of the network, a compression-based target tracking protocol is proposed in this work. It first incorporates...
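The abstract does not specify the compression scheme beyond the truncation above. Purely as an illustration of how tracking reports can be shrunk (not the paper's actual protocol), successive position fixes are highly correlated, so a track can be delta-encoded: send the first fix in full, then only the small coordinate differences.

```python
def delta_encode(track):
    """Encode a track (list of (x, y) fixes) as the first fix plus
    per-step deltas; deltas are small and compress well. Generic
    illustration, not the protocol proposed in the paper.
    """
    if not track:
        return []
    out = [track[0]]
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        out.append((x1 - x0, y1 - y0))
    return out

def delta_decode(encoded):
    """Invert delta_encode by accumulating the deltas."""
    track = encoded[:1]
    for dx, dy in encoded[1:]:
        x, y = track[-1]
        track.append((x + dx, y + dy))
    return track

track = [(100, 200), (101, 202), (103, 203)]
enc = delta_encode(track)
assert delta_decode(enc) == track  # lossless round trip
print(enc)  # [(100, 200), (1, 2), (2, 1)]
```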
Compressing Neural Networks using the Variational Information Bottleneck
Neural networks can be compressed to reduce memory and computational requirements, or to increase accuracy by facilitating the use of a larger base architecture. In this paper we focus on pruning individual neurons, which can simultaneously trim model size, FLOPs, and run-time memory. To improve upon the performance of existing compression algorithms we utilize the information bottleneck princi...
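The variational information bottleneck criterion itself is beyond a short sketch, but the neuron-level pruning it drives can be illustrated with a much simpler stand-in: score each neuron (here by the L2 norm of its outgoing weight row, i.e. magnitude pruning, not the paper's information-theoretic score) and keep only the top-scoring fraction.

```python
import math

def prune_neurons(weight_rows, keep_ratio=0.5):
    """Neuron-level pruning sketch: rank neurons by the L2 norm of
    their weight row and keep the top keep_ratio fraction. This is
    magnitude pruning, a simple stand-in for the paper's variational
    information bottleneck criterion.
    """
    norms = [math.sqrt(sum(w * w for w in row)) for row in weight_rows]
    k = max(1, int(len(weight_rows) * keep_ratio))
    kept = sorted(range(len(weight_rows)), key=lambda i: -norms[i])[:k]
    kept.sort()  # preserve the original neuron order
    return [weight_rows[i] for i in kept], kept

# Four neurons; two have near-zero weights and are pruned away.
W = [[0.1, -0.05], [2.0, 1.5], [0.01, 0.02], [1.0, -1.0]]
pruned, kept = prune_neurons(W, keep_ratio=0.5)
print(kept)  # [1, 3]
```

Whatever the scoring rule, removing a neuron deletes a whole row here and the matching column in the next layer, which is why neuron pruning reduces model size, FLOPs, and run-time memory simultaneously, as the abstract notes.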
Journal
Journal title: ACM Transactions on Knowledge Discovery from Data
Year: 2016
ISSN: 1556-4681, 1556-472X
DOI: 10.1145/2968451